Recursive orthogonal least squares learning with automatic weight selection for Gaussian neural networks

Authors

  • Meng H. Fun
  • Martin T. Hagan
Abstract

Gaussian neural networks have always suffered from the curse of dimensionality: the number of weights needed increases exponentially with the number of inputs and outputs. Many methods have been proposed to address this problem by optimally or sub-optimally selecting the weights or centers of the Gaussian neural network [1],[2]. However, most of these attempts are not suitable for online implementation. In this paper, we develop a Recursive Orthogonal Least Squares learning algorithm with Automatic Weight Selection (ROLS-AWS) for a two-layered Gaussian neural network. The ROLS-AWS algorithm is capable of selecting useful weights sub-optimally and recursively. In doing so, it not only limits the growth of the network size but also minimizes the number of weights used. Due to its recursive nature, the algorithm can be applied to any online system, as in control and signal processing applications.

Introduction

The Radial Basis Function (RBF) network often requires many hidden nodes due to the localized character of its basis functions. For practical purposes, it is desirable to construct the smallest possible RBF network. Many applications of the RBF network have opted for fixed centers or a fixed grid size to limit the number of nodes used [6],[7]. However, this approach often leads to large networks. On the other hand, the Orthogonal Least Squares (OLS) learning algorithm proposed by S. Chen [2] is a simple and efficient algorithm for fitting the RBF network. It also has the capability to select a smaller set of weights and to create a parsimonious network model. However, one drawback of this algorithm is that training is done in batch mode only. In this paper, we develop the Recursive Orthogonal Least Squares learning with Automatic Weight Selection (ROLS-AWS) algorithm, which is based on batch orthogonal least squares learning.

Generally, there are three methods for choosing the centers and the variances: the fixed centers method, the self-organized learning method (e.g., k-means clustering), and the stochastic gradient method [3]. Here we explore only the fixed centers method. Such an approach was first described by Broomhead & Lowe [4] and was then extended by S. Chen, who developed the orthogonal least squares learning method [2]. The fixed centers method considered here differs slightly from the standard method, to accommodate the recursion in time. We assume that each available data point is a potential center for the RBF network; when a data point arrives, the corresponding center becomes available as well. Meanwhile, the variances are fixed. Such an RBF network is linear in the parameters, since all RBF centers and nonlinearities in the hidden layer are fixed.

In the ROLS-AWS algorithm, we show that by combining the QR Decomposition Recursive Least Squares algorithm (QRD-RLS) with the forward subset selection method, we can create a recursive orthogonal least squares learning algorithm that automatically selects the centers and the weights. We prove that, given the old weight vector, the algorithm selects the optimal new weight vector, and we show that this sub-optimal solution is the same solution obtained by the batch forward selection method. This algorithm not only allows the Gaussian network to learn recursively in time but also adds centers and weights as they become necessary. Furthermore, the network size is kept to a minimum: the RBF network begins with just a bias and grows with useful centers and weights that are selected sub-optimally.
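To make the selection mechanism concrete, here is a minimal Python sketch of the growing-network idea described above: each incoming data point is a candidate center, and it is adopted only when its orthogonalized regressor explains a large enough fraction of the output energy, which is the error-reduction test of batch OLS [2]. All names in the sketch are illustrative, and the paper's QRD-RLS recursion is replaced by a plain least-squares refit over the stored data, so this is a sketch of the idea under those simplifications, not the authors' algorithm.

```python
import numpy as np

def gaussian(x, c, sigma):
    """Gaussian basis output for input x and center c (fixed variance)."""
    return np.exp(-np.sum((np.asarray(x, float) - c) ** 2) / (2.0 * sigma ** 2))

class GrowingRBF:
    """Illustrative growing RBF network: starts with a bias only and adopts
    a new center only when it passes an OLS-style error-reduction test."""

    def __init__(self, sigma=1.0, tol=0.01):
        self.sigma, self.tol = sigma, tol   # fixed variance; selection threshold
        self.centers = []                   # network starts with just a bias
        self.X, self.d = [], []             # sketch stores all data; ROLS-AWS does not
        self.w = np.zeros(1)                # w[0] is the bias weight

    def _design(self):
        """Design matrix: bias column plus one Gaussian column per center."""
        P = np.ones((len(self.X), 1 + len(self.centers)))
        for j, c in enumerate(self.centers):
            P[:, 1 + j] = [gaussian(x, c, self.sigma) for x in self.X]
        return P

    def update(self, x, d):
        """Present one (input, target) sample; keep x as a center only if useful."""
        self.X.append(np.asarray(x, float))
        self.d.append(float(d))
        dvec, P = np.array(self.d), self._design()
        # Candidate regressor for the newest point, orthogonalized against
        # the span of the columns already in the network.
        q = np.array([gaussian(xi, self.X[-1], self.sigma) for xi in self.X])
        q = q - P @ np.linalg.lstsq(P, q, rcond=None)[0]
        # Error-reduction ratio: fraction of output energy this center explains.
        err = (q @ dvec) ** 2 / ((q @ q) * (dvec @ dvec) + 1e-12)
        if err > self.tol:
            self.centers.append(self.X[-1])
            P = self._design()
        self.w = np.linalg.lstsq(P, dvec, rcond=None)[0]  # refit all weights

    def predict(self, x):
        phi = [1.0] + [gaussian(x, c, self.sigma) for c in self.centers]
        return np.array(phi) @ self.w
```

Streaming, for example, (x, sin x) samples through update() one at a time illustrates the intended behavior: the number of adopted centers typically stays well below the number of samples seen, while the fit keeps improving.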
Radial Basis Function (RBF) Network

The RBF network considered in this paper is a standard two-layered neural network with Gaussian nonlinearity in the hidden layer and a linear transfer function in the output layer. The output of the RBF network is computed according to the following linear equation:
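The equation itself is cut off in this excerpt. A standard form consistent with the description above (fixed Gaussian hidden units plus a bias feeding a linear output layer) is the following; the notation is a reconstruction, not necessarily the paper's own:

```latex
% Reconstructed standard Gaussian RBF output, not the paper's own equation
a(\mathbf{x}) \;=\; w_0 \;+\; \sum_{i=1}^{n} w_i
\exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}_i \rVert^{2}}{2\sigma^{2}}\right)
```

With the centers c_i and the variance held fixed, the output is linear in the weights w_i, which is exactly the property that lets recursive least squares machinery, and hence ROLS-AWS, apply.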


Related articles

Automatic Kernel Regression Modelling Using Combined Leave-One-Out Test Score and Regularised Orthogonal Least Squares

This paper introduces an automatic, robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches: parameter regularisation via ridge regression and model op...


Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection

In this study, two novel learning algorithms are applied to the Radial Basis Function Neural Network (RBFNN) to approximate functions of high nonlinear order. The Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly minimize the error functions. The main idea concerns the various strategies to optimize the procedure of Gradient ...


On the efficiency of the orthogonal least squares training method for radial basis function networks

The efficiency of the orthogonal least squares (OLS) method for training approximation networks is examined using the criterion of energy compaction. We show that the selection of basis vectors produced by the procedure is not the most compact when the approximation is performed using a nonorthogonal basis. Hence, the algorithm does not produce the smallest possible networks for a given approxi...


Iterative B-spline Neural Networks for Stochastic Distribution Control and Its Application in Industrial Process

Iterative learning of a B-spline basis function model for output probability density function (PDF) control of non-Gaussian systems is studied in this paper using the recursive least squares algorithm. Within each control interval, the basis functions are fixed and the control input is designed to control the shape of the output PDFs. However, between each control interval, periodi...


A mended hybrid learning algorithm for radial basis function neural networks to improve generalization capability

A two-step learning scheme for radial basis function neural networks, which combines the genetic algorithm (GA) with the hybrid learning algorithm (HLA), is proposed in this paper. It is compared with the GA method, the recursive orthogonal least squares algorithm (ROLSA), and another two-step learning scheme for RBF neural networks, which combines K-means clustering with the HLA (K-m...


Publication year: 1999